13 research outputs found

    Sparse-to-Continuous: Enhancing Monocular Depth Estimation using Occupancy Maps

    This paper addresses the problem of single image depth estimation (SIDE), focusing on improving the quality of deep neural network predictions. In a supervised learning scenario, the quality of predictions is intrinsically related to the training labels, which guide the optimization process. For indoor scenes, structured-light depth sensors (e.g. Kinect) can provide dense, albeit short-range, depth maps. For outdoor scenes, LiDAR is considered the standard sensor, but it provides comparatively much sparser measurements, especially in areas further away. Rather than modifying the neural network architecture to deal with sparse depth maps, this article introduces a novel densification method for depth maps, using the Hilbert Maps framework. A continuous occupancy map is produced from 3D points in LiDAR scans, and the resulting reconstructed surface is projected into a 2D depth map of arbitrary resolution. Experiments conducted on various subsets of the KITTI dataset show a significant improvement from the proposed Sparse-to-Continuous technique, without introducing extra information into the training stage.
    Comment: Accepted. (c) 2019 IEEE. Personal use of this material is permitted; permission from IEEE must be obtained for all other uses.
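    The densification idea can be sketched in a few lines. This is a simplified stand-in, not the paper's implementation: the actual method fits a continuous occupancy model (Hilbert Maps) to the 3D LiDAR points and projects the reconstructed surface, whereas this sketch interpolates already-projected sparse depths with an RBF kernel; the function name and length-scale are illustrative.

    ```python
    import numpy as np

    def densify_depth(sparse_uv, sparse_depth, out_shape, lengthscale=8.0):
        """Densify a sparse depth map with RBF kernel regression.

        sparse_uv:    (N, 2) pixel coordinates (row, col) of sparse samples
        sparse_depth: (N,) depth values at those pixels
        out_shape:    (H, W) of the desired dense depth map
        """
        h, w = out_shape
        # pixel grid of shape (H, W, 2), each entry a (row, col) coordinate
        vu = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"), -1)
        dense = np.zeros(out_shape)
        for i in range(h):
            # squared distance from every pixel in row i to each sparse sample
            d2 = ((vu[i, :, None, :] - sparse_uv[None, :, :]) ** 2).sum(-1)
            wgt = np.exp(-0.5 * d2 / lengthscale ** 2)
            # Nadaraya-Watson weighted average of the sparse depths
            dense[i] = (wgt @ sparse_depth) / (wgt.sum(1) + 1e-12)
        return dense
    ```

    Any pixel far from all samples falls back to a distance-weighted average, which is what gives the output its "continuous" character at arbitrary resolution.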

    Usability Study of a Control Framework for an Intelligent Wheelchair

    We describe the development and assessment of a computer-controlled wheelchair called the SMARTCHAIR. A shared control framework with different levels of autonomy allows the human operator to stay in complete control of the chair at each level while ensuring her safety. The framework incorporates deliberative motion plans or controllers, reactive behaviors, and human user inputs. At every instant in time, control inputs from these three sources are blended continuously to provide a safe trajectory to the destination, while allowing the human to maintain control and safely override the autonomous behavior. In this paper, we present usability experiments with 50 participants and demonstrate quantitatively the benefits of human-robot augmentation.
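    The continuous blending of the three control sources can be illustrated with a minimal sketch. The convex weights below are illustrative, not the SMARTCHAIR's actual arbitration logic, which adapts the blending so the human can always override the autonomous behavior.

    ```python
    import numpy as np

    def blend_controls(u_plan, u_reactive, u_human, w=(0.4, 0.3, 0.3)):
        """Continuously blend deliberative, reactive, and human control inputs.

        Each source proposes a (linear, angular) velocity command; the
        outputs are mixed by convex weights so the result stays inside the
        hull of the three proposals.
        """
        w = np.asarray(w, float)
        w = w / w.sum()                       # keep the combination convex
        cmds = np.stack([u_plan, u_reactive, u_human])
        return w @ cmds                       # weighted average command
    ```

    Because the combination is convex, if all three sources agree the blended command equals that consensus, and any single source can only pull the output proportionally to its weight.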

    Semantic SuperPoint: A Deep Semantic Descriptor

    Several SLAM methods benefit from the use of semantic information, most integrating photometric methods with high-level semantics such as object detection and semantic segmentation. We propose that adding a semantic segmentation decoder in a shared-encoder architecture helps the descriptor decoder learn semantic information, improving the feature extractor. This is more robust than using only high-level semantic information, since the semantics are intrinsically learned in the descriptor and do not depend on the final quality of the semantic prediction. To add this information, we take advantage of multi-task learning methods to improve accuracy and balance the performance of each task. The proposed models are evaluated on detection and matching metrics on the HPatches dataset. The results show that the Semantic SuperPoint model performs better than the baseline.
    Comment: Paper accepted at the 19th IEEE Latin American Robotics Symposium - LARS 202
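    One standard way to balance the detector, descriptor, and semantic-segmentation tasks is homoscedastic-uncertainty weighting (Kendall et al., 2018). Whether Semantic SuperPoint uses exactly this scheme is an assumption here; the sketch only illustrates the kind of multi-task balancing the abstract refers to.

    ```python
    import numpy as np

    def uncertainty_weighted_loss(task_losses, log_vars):
        """Balance per-task losses with learned homoscedastic uncertainty.

        Each task loss L_i is scaled by exp(-s_i) and regularized by s_i,
        where s_i = log(sigma_i^2) would be a learnable parameter in the
        network: total = sum_i exp(-s_i) * L_i + s_i. Tasks the model is
        uncertain about (large s_i) are automatically down-weighted.
        """
        task_losses = np.asarray(task_losses, float)
        log_vars = np.asarray(log_vars, float)
        return float(np.sum(np.exp(-log_vars) * task_losses + log_vars))
    ```

    With all `log_vars` at zero the scheme reduces to a plain sum of the task losses, which makes it a safe initialization.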

    Learning to Race through Coordinate Descent Bayesian Optimisation

    In the automation of many kinds of processes, the observable outcome can often be described as the combined effect of an entire sequence of actions, or controls, applied throughout execution. In these cases, strategies that optimise control policies for individual stages of the process might not be applicable, and the whole policy might instead have to be optimised at once. At the same time, the cost of evaluating the policy's performance might be high, making it desirable to find a solution with as few interactions with the real system as possible. We consider the problem of optimising control policies to allow a robot to complete a given race track in minimum time. We assume that the robot has no prior information about the track or its own dynamical model, only an initial valid driving example. Localisation is used only to monitor the robot and to indicate its position along the track's centre axis. We propose a method for finding a policy that minimises the time per lap while keeping the vehicle on the track, using a Bayesian optimisation (BO) approach over a reproducing kernel Hilbert space. To search more efficiently over high-dimensional policy-parameter spaces with BO, we iterate over each dimension individually in a sequential, coordinate-descent-like scheme. Experiments demonstrate the performance of the algorithm against other methods in a simulated car racing environment.
    Comment: Accepted as a conference paper for the 2018 IEEE International Conference on Robotics and Automation (ICRA)
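    The coordinate-descent BO scheme can be sketched as follows. This toy version uses a fixed RBF kernel, a zero prior mean, and a lower-confidence-bound acquisition over a gridded 1D slice per dimension; the paper's method works in a reproducing kernel Hilbert space, so all names and hyperparameters here are illustrative.

    ```python
    import numpy as np

    def coordinate_descent_bo(f, x0, bounds, sweeps=2, iters_per_dim=8):
        """Minimise f by sweeping dimensions, running a cheap 1D BO per slice.

        Instead of one Bayesian optimisation over the full policy-parameter
        vector, each sweep fixes all coordinates but one and optimises that
        single coordinate, coordinate-descent style.
        """
        x = np.array(x0, float)
        for _ in range(sweeps):
            for d in range(len(x)):
                lo, hi = bounds[d]
                slice_f = lambda t: f(np.concatenate([x[:d], [t], x[d + 1:]]))
                x[d] = _bo_1d(slice_f, x[d], lo, hi, iters_per_dim)
        return x

    def _bo_1d(f, t0, lo, hi, n_iter, ls=0.2, beta=2.0, noise=1e-6):
        """Tiny 1D GP loop: fit on observed points, pick the LCB minimiser."""
        X = np.array([t0]); y = np.array([f(t0)])
        cand = np.linspace(lo, hi, 101)
        rbf = lambda a, b: np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls ** 2)
        for _ in range(n_iter):
            Kinv = np.linalg.inv(rbf(X, X) + noise * np.eye(len(X)))
            k = rbf(cand, X)
            mu = k @ Kinv @ y                                   # posterior mean
            var = np.clip(1.0 - np.einsum("ij,jk,ik->i", k, Kinv, k), 1e-12, None)
            t = cand[np.argmin(mu - beta * np.sqrt(var))]       # LCB acquisition
            X = np.append(X, t); y = np.append(y, f(t))
        return X[np.argmin(y)]
    ```

    Each 1D sub-problem needs only a handful of evaluations, so the total budget grows linearly with dimension rather than exponentially, which is the point of the coordinate-wise scheme.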

    Integrating Human Inputs with Autonomous Behaviors on an Intelligent Wheelchair Platform

    Researchers have developed and assessed a computer-controlled wheelchair called the Smart Chair. A shared control framework has different levels of autonomy, allowing the human operator complete control of the chair at each level while ensuring the user's safety. The semiautonomous system incorporates deliberative motion plans or controllers, reactive behaviors, and human user inputs. At every instant in time, control inputs from three sources are integrated continuously to provide a safe trajectory to the destination. Experiments with 50 participants demonstrate quantitatively and qualitatively the benefits of human-robot augmentation in three modes of operation: manual, autonomous, and semiautonomous. This article is part of a special issue on Interacting with Autonomy.

    Incorporating User Inputs in Motion Planning for a Smart Wheelchair

    We describe the development and assessment of a computer-controlled wheelchair equipped with a suite of sensors and a novel interface, called the SMARTCHAIR. The main focus of this paper is a shared control framework that allows the human operator to interact with the chair while it is performing an autonomous task. At the highest level, the autonomous system can plan paths using high-level deliberative navigation behaviors based on destinations or waypoints commanded by the user. The user can locally modify or override previously commanded autonomous behaviors or plans. This is possible because of our hierarchical control strategy, which combines three independent sources of control inputs: deliberative plans obtained from maps and user commands, reactive behaviors generated by stimuli from the environment, and user-initiated commands that might arise during the execution of a plan or behavior. The framework ensures the user's safety while allowing the user to remain in complete control of a potentially autonomous system.